102 research outputs found

    Speech-derived haptic stimulation enhances speech recognition in a multi-talker background

    Published: 03 October 2023
    Speech understanding, while effortless in quiet conditions, is challenging in noisy environments. Previous studies have shown that a feasible approach to supplementing speech-in-noise (SiN) perception is to present speech-derived signals as haptic input. In the current study, we investigated whether presenting a vibrotactile signal derived from the speech temporal envelope can improve SiN intelligibility in a multi-talker background for untrained, normal-hearing listeners. We also determined whether vibrotactile sensitivity, evaluated using vibrotactile detection thresholds, modulates the extent of audio-tactile SiN improvement. In practice, we measured participants’ speech recognition in multi-talker noise without (audio-only) and with (audio-tactile) concurrent vibrotactile stimulation delivered in three schemes: to the left palm, to the right palm, or to both. Averaged across the three delivery schemes, the vibrotactile stimulation led to a significant improvement of 0.41 dB in SiN recognition compared with the audio-only condition. Notably, there were no significant differences between the improvements across delivery schemes. In addition, the audio-tactile SiN benefit was significantly predicted by participants’ vibrotactile threshold levels and by unimodal (audio-only) SiN performance. The extent of the improvement afforded by speech-envelope-derived vibrotactile stimulation was in line with previously reported vibrotactile enhancements of SiN perception in untrained listeners with no known hearing impairment. Overall, these results highlight the potential of concurrent vibrotactile stimulation to improve SiN recognition, especially in individuals with poor SiN perception abilities, and tentatively more so with increasing tactile sensitivity. Moreover, they lend support to multimodal accounts of speech perception and to research on tactile speech aid devices.
    I. Sabina Răutu is supported by the Fonds pour la formation à la recherche dans l’industrie et l’agriculture (FRIA), Fonds de la Recherche Scientifique (FRS-FNRS), Brussels, Belgium. Xavier De Tiège is Clinical Researcher at the FRS-FNRS. This research project has been supported by the Fonds Erasme (Research convention “Les Voies du Savoir 2”, Brussels, Belgium).

    Cortical tracking of lexical speech units in a multi-talker background is immature in school-aged children

    Available online 1 December 2022
    Children have more difficulty perceiving speech in noise than adults do. Whether this difficulty relates to immature processing of the prosodic or the linguistic elements of the attended speech is still unclear. To address the impact of noise on linguistic processing per se, we assessed how babble noise impacts the cortical tracking of intelligible speech devoid of prosody in school-aged children and adults. Twenty adults and twenty children (7-9 years) listened to synthesized French monosyllabic words presented at 2.5 Hz, either randomly or in 4-word hierarchical structures wherein 2 words formed a phrase at 1.25 Hz and 2 phrases formed a sentence at 0.625 Hz, with or without babble noise. Neuromagnetic responses to words, phrases and sentences were identified and source-localized. Children and adults displayed significant cortical tracking of words in all conditions, and of phrases and sentences only when words formed meaningful sentences. In children compared with adults, cortical tracking was lower for all linguistic units in conditions without noise. In the presence of noise, cortical tracking was similarly reduced for sentence units in both groups but remained stable for phrase units. Critically, in noise, adults increased the cortical tracking of monosyllabic words in the inferior frontal gyri and supratemporal auditory cortices, but children did not. This study demonstrates that the difficulties of school-aged children in understanding speech in a multi-talker background might be partly due to an immature tracking of lexical but not supra-lexical linguistic units.
    Maxime Niesen and Marc Vander Ghinst were supported by the Fonds Erasme (Brussels, Belgium). Mathieu Bourguignon and Julie Bertels have been supported by the program Attract of Innoviris (grants 2015-BB2B-10 and 2019-BFB-110). Julie Bertels has been supported by a research grant from the Fonds de Soutien Marguerite-Marie Delacroix (Brussels, Belgium). Xavier De Tiège is Clinical Researcher at the Fonds de la Recherche Scientifique (FRS-FNRS, Brussels, Belgium). We warmly thank Mélina Houinsou Hans for her statistical support during the review process.

    The role of reading experience in atypical cortical tracking of speech and speech-in-noise in dyslexia

    Available online 5 March 2022
    Dyslexia is a frequent developmental disorder in which reading acquisition is delayed and which is usually associated with difficulties understanding speech in noise. At the neuronal level, children with dyslexia have been reported to display abnormal cortical tracking of speech (CTS) at the phrasal rate. Here, we aimed to determine whether abnormal tracking relates to reduced reading experience, and whether it is modulated by the severity of dyslexia or the presence of acoustic noise. We included 26 school-age children with dyslexia, 26 age-matched controls and 26 reading-level-matched controls. All were native French speakers. Children’s brain activity was recorded with magnetoencephalography while they listened to continuous speech in noiseless and multiple noise conditions. CTS values were compared between groups, conditions and hemispheres, and also within groups, between children with mild and severe dyslexia. Syllabic CTS was significantly reduced in the right superior temporal gyrus in children with dyslexia compared with controls matched for age but not with controls matched for reading level. Severe dyslexia was characterized by lower rapid automatized naming (RAN) abilities compared with mild dyslexia, and phrasal CTS lateralized to the right hemisphere in children with mild dyslexia and in all control groups but not in children with severe dyslexia. Finally, an alteration in phrasal CTS was uncovered in children with dyslexia compared with age-matched controls in babble noise conditions but not in other, less challenging listening conditions (non-speech noise or noiseless conditions); no such effect was seen in comparison with reading-level-matched controls. Overall, our results confirmed the finding of an altered neuronal basis of speech perception in noiseless and babble noise conditions in dyslexia compared with age-matched peers. However, the absence of alteration in comparison with reading-level-matched controls demonstrates that such alterations are associated with reduced reading level, suggesting they are merely driven by reduced reading experience rather than being a cause of dyslexia. Finally, our result of altered hemispheric lateralization of phrasal CTS in relation to altered RAN abilities in severe dyslexia is in line with a temporal sampling deficit of speech at the phrasal rate in dyslexia.
    Florian Destoky, Julie Bertels and Mathieu Bourguignon have been supported by the program Attract of Innoviris (Grants 2015-BB2B-10 and 2019-BFB-110). Julie Bertels has been supported by a research grant from the Fonds de Soutien Marguerite-Marie Delacroix (Brussels, Belgium). Xavier De Tiège is Post-doctorate Clinical Master Specialist at the Fonds de la Recherche Scientifique (F.R.S.-FNRS, Brussels, Belgium). Mathieu Bourguignon has been supported by the Marie Skłodowska-Curie Action of the European Commission (Grant 743562). The MEG project at the CUB Hôpital Erasme and this study were financially supported by the Fonds Erasme (Research convention “Les Voies du Savoir”, Brussels, Belgium). The PET-MR project at the CUB Hôpital Erasme is supported by the Association Vinçotte Nuclear (AVN, Brussels, Belgium).

    Cortical tracking of speech in noise accounts for reading strategies in children

    Humans’ propensity to acquire literacy relates to several factors, including the ability to understand speech in noise (SiN). Still, the nature of the relation between reading and SiN perception abilities remains poorly understood. Here, we dissect the interplay between (1) reading abilities, (2) classical behavioral predictors of reading (phonological awareness, phonological memory, and rapid automatized naming), and (3) electrophysiological markers of SiN perception in 99 elementary school children (26 with dyslexia). We demonstrate that, in typical readers, cortical representation of the phrasal content of SiN relates to the degree of development of the lexical (but not sublexical) reading strategy. In contrast, classical behavioral predictors of reading abilities and the ability to benefit from visual speech to represent the syllabic content of SiN account for global reading performance (i.e., speed and accuracy of lexical and sublexical reading). In individuals with dyslexia, we found preserved integration of visual speech information to optimize processing of syntactic information but not to sustain acoustic/phonemic processing. Finally, within children with dyslexia, measures of cortical representation of the phrasal content of SiN were negatively related to reading speed and positively related to the compromise between reading precision and reading speed, potentially owing to compensatory attentional mechanisms. These results clarify the nature of the relation between SiN perception and reading abilities in typical child readers and children with dyslexia and identify novel electrophysiological markers of emergent literacy.

    Rapid detection of snakes modulates spatial orienting in infancy

    Recent evidence for an evolved fear module in the brain comes from studies showing that adults, children and infants detect evolutionarily threatening stimuli such as snakes faster than non-threatening ones. A decisive argument for a threat detection system efficient early in life would come from data showing, in young infants, a functional threat-detection mechanism in terms of “what” and “where” visual pathways. The present study used a variant of Posner’s cuing paradigm, adapted to 7–11-month-olds. On each trial, a threat-irrelevant or a threat-relevant cue was presented (a flower or a snake, i.e., “what”). We measured how fast infants detected these cues and the extent to which they further influenced the spatial allocation of attention (“where”). In line with previous findings, we observed that infants oriented faster towards snake than flower cues. Importantly, a facilitation effect was found at the cued location for flowers but not for snakes, suggesting that these latter cues elicit a broadening of attention and arguing in favour of sophisticated “what–where” connections. These results strongly support the claim that humans have an early propensity to detect evolutionarily threat-relevant stimuli.

    Learning the association between a context and a target location in infancy

    No full text
    info:eu-repo/semantics/nonPublished

    Fast and slow attentional effects of emotional spoken words in the emotional Stroop task

    No full text
    info:eu-repo/semantics/nonPublished

    Influence of the emotional valence of auditory stimuli on attentional orienting

    No full text
    The aim of this thesis was to investigate how the negative, positive or taboo emotional valence of spoken words influences the orienting of attentional resources in the general population. To this end, I developed auditory adaptations of experimental paradigms previously used to explore the influence of the emotional content of visual stimuli on the allocation of attention: the attention deployment paradigm (Studies 1 and 3), the emotional Stroop paradigm (Study 2) and the emotional spatial cueing paradigm (Study 4).

    In particular, Studies 1, 3 and 4 allowed me to examine the influence of the emotional valence of these stimuli on selective attention to a spatial location, assessed through responses to a subsequent target.

    In the competition for attentional resources that is specific to the attention deployment paradigm (Studies 1 and 3), we observed a preferential engagement of attentional resources towards the spatial location of taboo words, when these were presented on the right, relative to the spatial location of the neutral words presented alongside them. These attentional biases were observed regardless of the voluntary attention paid to the stimuli, the nature of the task to be performed on the target, or the cognitive load associated with the task. Such biases were also observed towards the spatial location of negative and positive words, but less robustly. When two stimuli compete for the orienting of resources, shocking valence would thus be crucial for the orienting of spatial attention. Moreover, taboo words induced a general slowing of reaction times (RTs) to the subsequent target, regardless of its spatial location.

    In contrast, when cue words were presented in isolation in the emotional spatial cueing paradigm (Study 4), the negative emotional valence of the words, but not their shocking valence, appeared crucial for observing spatial effects: the most negative stimuli would modulate the automatic spatial orienting of attention elicited by their peripheral presentation. More precisely, they would prevent the application of attentional processes that inhibit already-explored locations. Furthermore, the presentation of a negative peripheral cue speeded up the processing of a subsequent target, regardless of its spatial location.

    The influence of the emotional dimension of spoken words on selective attention to a (non-emotional) dimension of these stimuli was investigated with the emotional Stroop paradigm (Study 2). Unlike my other studies, no spatial shift of attention was involved in this situation, since on each trial participants had to respond to a non-emotional dimension (speaker identity) of the (potentially emotional) stimulus presented. I observed an influence of the taboo or negative emotional dimension of the words on the processing of the relevant dimension of a subsequent neutral word, but not on the processing of the relevant dimension of these words themselves, suggesting slow, between-trial effects of taboo and negative words, but no fast effect.

    These data thus support the existence, in the general population, of a mechanism of involuntary processing of the emotional content of spoken words that influences not only the spatial and dimensional orienting of attention but also, more generally, the latency of the participant's responses.

    Doctorat en Sciences Psychologiques et de l'éducation
    info:eu-repo/semantics/nonPublished

    Fast and slow attentional effects of emotional spoken words in the emotional Stroop task

    No full text
    info:eu-repo/semantics/published

    Does the Perruchet effect indicate a dissociation between learning and awareness?

    No full text
    info:eu-repo/semantics/nonPublished